Search for: All records

Creators/Authors contains: "Zheng, Haozhen"


  1. Efficient single-instance segmentation is critical for unlocking features in on-the-fly mobile imaging applications, such as photo capture and editing. Existing mobile solutions often restrict segmentation to portraits or salient objects due to computational constraints. Recent advances such as the Segment Anything Model improve accuracy but remain computationally expensive on mobile devices because they process the entire image with heavy transformer backbones. To address this, we propose TraceNet, a one-click-driven single-instance segmentation model. TraceNet segments a user-specified instance by back-tracing the receptive field of a ConvNet backbone, focusing computation on relevant regions and reducing inference cost and memory usage on-device. Starting from user needs in real mobile applications, we define the efficient single-instance segmentation task and introduce two novel metrics that evaluate both accuracy and robustness to low-quality input clicks. Extensive evaluations on the MS-COCO and LVIS datasets highlight TraceNet's ability to generate high-quality instance masks efficiently and accurately while remaining robust to imperfect user inputs.
    Free, publicly accessible full text available August 5, 2026
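
    The abstract above hinges on back-tracing a ConvNet backbone's receptive field from a user click. The paper's actual method is not given here, so the following is only a minimal, self-contained Python sketch of the standard receptive-field arithmetic that such back-tracing relies on: it maps a click to the feature cell above it and returns the input window that feeds that cell, which could then be cropped before running the backbone. The names ConvSpec, receptive_field, and click_to_input_window and the toy layer configuration are hypothetical, not from the paper.

        from dataclasses import dataclass

        @dataclass
        class ConvSpec:
            kernel: int   # square kernel size
            stride: int
            padding: int

        def receptive_field(layers):
            # Standard receptive-field recurrence: rf = window size (in input
            # pixels) seen by one deep feature cell, jump = input-pixel distance
            # between adjacent feature cells, start = center of the first cell.
            rf, jump, start = 1, 1, 0.5
            for l in layers:
                start += ((l.kernel - 1) / 2 - l.padding) * jump
                rf += (l.kernel - 1) * jump
                jump *= l.stride
            return rf, jump, start

        def click_to_input_window(click, layers, image_hw):
            # Back-trace a click (x, y) to the input region feeding the feature
            # cell it falls into, clipped to the image bounds.
            rf, jump, start = receptive_field(layers)
            h, w = image_hw
            cx, cy = int(click[0] // jump), int(click[1] // jump)   # feature cell
            mx, my = start + cx * jump, start + cy * jump           # cell center
            half = rf / 2
            x0, x1 = max(0, int(mx - half)), min(w, int(mx + half))
            y0, y1 = max(0, int(my - half)), min(h, int(my + half))
            return x0, y0, x1, y1

        # Toy 3-layer backbone; deeper backbones yield a much larger window.
        layers = [ConvSpec(7, 2, 3), ConvSpec(3, 2, 1), ConvSpec(3, 2, 1)]
        print(click_to_input_window((400, 300), layers, (720, 1280)))

    A real system would dilate this window, since the target instance usually extends well beyond one cell's receptive field, but cropping to a click-derived region is what lets inference cost scale with the region of interest rather than the full frame.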